In probability theory and mathematical physics, a random matrix is a matrix-valued random variable. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.
In nuclear physics, random matrices were introduced by Eugene Wigner[1] to model the spectra of heavy atoms. He postulated that the spacings between the lines in the spectrum of a heavy atom should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[2] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean field approximation.
In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture[3] asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.
Random matrix theory has also found applications to quantum gravity in two dimensions,[4] mesoscopic physics,[5] and more.[6][7][8][9][10]
In multivariate statistics, random matrices were introduced by John Wishart for statistical analysis of large samples;[11] see estimation of covariance matrices.
Significant results have been shown that extend the classical scalar Chernoff, Bernstein, and Hoeffding inequalities to the largest eigenvalues of finite sums of random Hermitian matrices.[12] Corollary results are derived for the maximum singular values of rectangular matrices.
In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[13] to describe computation errors in operations such as matrix multiplication. See also[14] for more recent results.
In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modelled by the distribution of eigenvalues of certain random matrices.[15] The connection was first discovered by Hugh Montgomery and Freeman J. Dyson. It is connected to the Hilbert–Pólya conjecture.
The most studied random matrix ensembles are the Gaussian ensembles.
The Gaussian unitary ensemble GUE(n) is described by the Gaussian measure with density

    (1/Z_GUE(n)) exp(−(n/2) tr H²)

on the space of n × n Hermitian matrices H = (Hij), i, j = 1, …, n. Here Z_GUE(n) = 2^(n/2) (π/n)^(n²/2) is a normalisation constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.
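As an illustration of this definition, the following minimal sketch (in Python with NumPy; the helper name sample_gue is illustrative, not standard) draws a matrix from GUE(n) by symmetrising a matrix of independent complex Gaussian entries, with the scaling chosen to match the density exp(−(n/2) tr H²).

    import numpy as np

    def sample_gue(n, rng=None):
        """Draw one n x n matrix from GUE(n), density ~ exp(-(n/2) tr H^2).

        With this scaling the diagonal entries have variance 1/n and the
        off-diagonal entries satisfy E|H_ij|^2 = 1/n.
        """
        rng = np.random.default_rng(rng)
        # Independent complex Gaussian matrix; real/imag parts have variance 1/n.
        a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
        return (a + a.conj().T) / 2  # Hermitian symmetrisation

    if __name__ == "__main__":
        h = sample_gue(500, rng=0)
        print(np.allclose(h, h.conj().T))   # True: the matrix is Hermitian
        print(np.linalg.eigvalsh(h)[:3])    # a few of the (real) eigenvalues

With this normalisation the eigenvalues remain in a bounded interval (approximately [−2, 2]) as n grows, which matches the global spectral behaviour discussed below.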
The Gaussian orthogonal ensemble GOE(n) is described by the Gaussian measure with density

    (1/Z_GOE(n)) exp(−(n/4) tr H²)

on the space of n × n real symmetric matrices H = (Hij), i, j = 1, …, n. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry.
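Analogously, a GOE(n) matrix can be drawn by symmetrising a real Gaussian matrix; a minimal sketch (the helper name is ours, and the entry variances are chosen to match the density exp(−(n/4) tr H²)):

    import numpy as np

    def sample_goe(n, rng=None):
        """Draw one n x n matrix from GOE(n), density ~ exp(-(n/4) tr H^2).

        Off-diagonal entries get variance 1/n and diagonal entries variance 2/n.
        """
        rng = np.random.default_rng(rng)
        a = rng.standard_normal((n, n)) * np.sqrt(2.0 / n)
        return (a + a.T) / 2  # real symmetric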
The Gaussian symplectic ensemble GSE(n) is described by the Gaussian measure with density

    (1/Z_GSE(n)) exp(−n tr H²)

on the space of n × n quaternionic Hermitian matrices H = (Hij), i, j = 1, …, n. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.
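A quaternionic Hermitian matrix can be encoded as a 2n × 2n complex matrix built from 2 × 2 quaternion blocks. The sketch below (an illustrative helper of ours; the entry variances are intended to match the density above in the quaternion-trace convention, and other references differ by constant factors) assembles the block form [[A, B], [−B̄, Ā]] with A Hermitian and B antisymmetric; the eigenvalues of this representation come in doubly degenerate (Kramers) pairs.

    import numpy as np

    def sample_gse(n, rng=None):
        """Draw a GSE(n) matrix in its 2n x 2n complex representation.

        The quaternionic Hermitian matrix is encoded as [[A, B], [-conj(B), conj(A)]]
        with A Hermitian and B antisymmetric; this complex matrix is Hermitian and
        self-dual, and each of its eigenvalues appears twice (Kramers degeneracy).
        """
        rng = np.random.default_rng(rng)
        scale = 1.0 / np.sqrt(2 * n)
        m = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * scale
        p = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * scale
        a = (m + m.conj().T) / 2          # Hermitian block
        b = (p - p.T) / 2                 # antisymmetric block
        return np.block([[a, b], [-b.conj(), a.conj()]])

    if __name__ == "__main__":
        vals = np.sort(np.linalg.eigvalsh(sample_gse(4, rng=0)))
        print(vals)  # each eigenvalue appears twice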
The joint probability density for the eigenvalues λ1, λ2, ..., λn of GUE/GOE/GSE is given by

    (1/Z_β,n) ∏k=1..n exp(−(βn/4) λk²) · ∏i<j |λi − λj|^β,     (1)

where β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE; Z_β,n is a normalisation constant which can be explicitly computed, see Selberg integral. In the case of GUE (β = 2), the formula (1) describes a determinantal point process.
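Since the density (1) is known only up to the constant Z_β,n, one can sample from it with a simple Markov chain Monte Carlo scheme. Below is a minimal Metropolis sketch (our own illustrative code, not an optimised algorithm) for n eigenvalues at a given β; with β = 2, its output should be statistically indistinguishable from the spectra of GUE matrices of the same size.

    import numpy as np

    def log_density(lam, beta, n):
        """Log of the unnormalised joint eigenvalue density (1)."""
        gaps = np.abs(np.subtract.outer(lam, lam))
        i, j = np.triu_indices(n, k=1)
        return -(beta * n / 4) * np.sum(lam**2) + beta * np.sum(np.log(gaps[i, j]))

    def metropolis_eigenvalues(n, beta, steps=20000, step_size=0.05, rng=None):
        """Random-walk Metropolis sampler targeting the density (1)."""
        rng = np.random.default_rng(rng)
        lam = rng.standard_normal(n)            # arbitrary starting configuration
        logp = log_density(lam, beta, n)
        for _ in range(steps):
            prop = lam + step_size * rng.standard_normal(n)
            logp_prop = log_density(prop, beta, n)
            if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance
                lam, logp = prop, logp_prop
        return np.sort(lam)

    if __name__ == "__main__":
        print(metropolis_eigenvalues(n=10, beta=2, rng=0))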
Wigner matrices are random Hermitian matrices whose entries above the main diagonal are independent random variables with zero mean and identical second moments.
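The definition does not require Gaussian entries; for instance, the following sketch (with an assumed 1/√n scaling so that the spectrum stays bounded; the helper name is ours) builds a Wigner matrix with random-sign (Rademacher) entries.

    import numpy as np

    def sample_wigner_rademacher(n, rng=None):
        """A Wigner matrix whose entries above the diagonal are independent
        +/- 1/sqrt(n) signs: zero mean and identical second moments 1/n."""
        rng = np.random.default_rng(rng)
        signs = rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(n)
        upper = np.triu(signs, k=1)
        diag = np.diag(rng.choice([-1.0, 1.0], size=n) / np.sqrt(n))
        return upper + upper.T + diag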
Invariant matrix ensembles are random Hermitian matrices with a density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices which is proportional to exp(−n tr V(H)), where the function V is called the potential.
The Gaussian ensembles are the only common special cases of these two classes of random matrices.
The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.
In the global regime, one is interested in the distribution of linear statistics of the form Nf,H = n−1 tr f(H).
The empirical spectral measure μH of H is defined by

    μH(A) = (1/n) · #{ eigenvalues of H in A },    A ⊂ ℝ.

Usually, the limit of μH is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted N(λ). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ρ(λ).
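In computational terms, the empirical spectral measure assigns mass 1/n to each eigenvalue, and its cumulative distribution function is a step function. A small sketch (the helper name is ours):

    import numpy as np

    def empirical_spectral_measure(h, grid):
        """Fraction of eigenvalues of h that are <= each point of `grid`
        (the empirical counterpart of the integrated density of states)."""
        eigs = np.linalg.eigvalsh(h)                  # real eigenvalues, sorted
        return np.searchsorted(eigs, grid, side="right") / len(eigs)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 300
        a = rng.standard_normal((n, n))
        h = (a + a.T) / np.sqrt(2 * n)                # a Wigner-type matrix
        print(empirical_spectral_measure(h, np.linspace(-3, 3, 7)))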
The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner's semicircle law. A more general theory was developed by Marchenko and Pastur.[16][17]
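A quick numerical check of the semicircle limit: with the GUE scaling used in the sketches above (E|Hij|² = 1/n), the density of states is ρ(λ) = (1/2π)·√(4 − λ²) on [−2, 2]. The sketch below compares a histogram of eigenvalues with this density.

    import numpy as np

    def semicircle_density(x):
        """Wigner semicircle density on [-2, 2] for the scaling E|H_ij|^2 = 1/n."""
        return np.sqrt(np.clip(4.0 - x**2, 0.0, None)) / (2.0 * np.pi)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n = 1000
        a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
        eigs = np.linalg.eigvalsh((a + a.conj().T) / 2)
        hist, edges = np.histogram(eigs, bins=40, range=(-2.2, 2.2), density=True)
        centers = (edges[:-1] + edges[1:]) / 2
        # The histogram and the semicircle should be close; the largest
        # deviations occur near the spectral edges at +/- 2.
        print(np.max(np.abs(hist - semicircle_density(centers))))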
The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.[18]
For the linear statistics Nf,H = n−1 ∑ f(λj), one is also interested in the fluctuations about ∫ f(λ) dN(λ). For many classes of random matrices, a central limit theorem of the form

    n [ Nf,H − ∫ f(λ) dN(λ) ] → N(0, σf²)

is known, where the limiting variance σf² depends on f and on the ensemble.
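A striking feature of such central limit theorems is that the fluctuations of n·Nf,H (the sum of f over the eigenvalues) are of order one, with no further normalisation by √n. A Monte Carlo sketch of this, under the illustrative GUE scaling used above and with f(λ) = λ²:

    import numpy as np

    def linear_statistic(h, f):
        """n * N_{f,H}: the sum of f over the eigenvalues of h."""
        return np.sum(f(np.linalg.eigvalsh(h)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        f = lambda x: x**2
        for n in (100, 200, 400):
            samples = []
            for _ in range(200):
                a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
                samples.append(linear_statistic((a + a.conj().T) / 2, f))
            # The sample variance stays bounded as n grows, in contrast with
            # a sum of n independent terms, whose variance would grow linearly.
            print(n, np.var(samples))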
In the local regime, one is interested in the spacings between eigenvalues, and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/n. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.
Formally, fix λ0 in the interior of the support of N(λ). Then consider the point process

    Ξ(λ0) = ∑j δ( · − n ρ(λ0) (λj − λ0) ),

where λj are the eigenvalues of the random matrix.
The point process Ξ(λ0) captures the statistical properties of eigenvalues in the vicinity of λ0. For the Gaussian ensembles, the limit of Ξ(λ0) is known;[2] thus, for GUE it is a determinantal point process with the kernel

    K(x, y) = sin(π(x − y)) / (π(x − y))

(the sine kernel).
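One observable consequence of these local statistics is level repulsion: after rescaling by n ρ(λ0), the nearest-neighbour spacings of GUE eigenvalues near a bulk point are far from exponential (Poisson) and are well approximated by the Wigner surmise (32/π²) s² exp(−4s²/π). A minimal numerical sketch of this unfolding (the helper name and the choice λ0 = 0 are ours):

    import numpy as np

    def bulk_spacings(n, lam0=0.0, window=0.5, rng=None):
        """Rescaled nearest-neighbour spacings of a GUE(n) matrix near lam0."""
        rng = np.random.default_rng(rng)
        a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
        eigs = np.linalg.eigvalsh((a + a.conj().T) / 2)
        rho0 = np.sqrt(4.0 - lam0**2) / (2.0 * np.pi)    # semicircle density at lam0
        local = eigs[np.abs(eigs - lam0) < window]
        return n * rho0 * np.diff(local)                 # unfolded spacings

    if __name__ == "__main__":
        s = np.concatenate([bulk_spacings(400, rng=k) for k in range(20)])
        print(np.mean(s))   # close to 1 after unfolding
        # A histogram of s can be compared with the Wigner surmise
        # p(s) = (32 / np.pi**2) * s**2 * np.exp(-4 * s**2 / np.pi).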
The universality principle postulates that the limit of Ξ(λ0) as n → ∞ should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on λ0). This was rigorously proved for several models of random matrices: for invariant matrix ensembles,[21][22] for Wigner matrices,[23][24] and others.
Wishart matrices are n × n random matrices of the form H = X X*, where X is an n × n random matrix with independent entries, and X* is its conjugate transpose. In the important special case considered by Wishart, the entries of X are identically distributed Gaussian random variables (either real or complex).
The limit of the empirical spectral measure of Wishart matrices was found[16] by Vladimir Marchenko and Leonid Pastur, see Marchenko–Pastur distribution.
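A numerical sketch of the Marchenko–Pastur limit (the scaling choices here are ours): take X with independent standard Gaussian entries of size n × m, form H = X X*/m, and compare the eigenvalue histogram with the Marchenko–Pastur density for the aspect ratio c = n/m.

    import numpy as np

    def marchenko_pastur_density(x, c):
        """MP density for H = X X*/m with unit-variance entries and aspect ratio
        c = n/m <= 1; the support is [(1 - sqrt(c))^2, (1 + sqrt(c))^2]."""
        a, b = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
        inside = (x > a) & (x < b)
        out = np.zeros_like(x)
        out[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) / (2 * np.pi * c * x[inside])
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, m = 500, 1000                      # aspect ratio c = 0.5
        x = rng.standard_normal((n, m))
        eigs = np.linalg.eigvalsh(x @ x.T / m)
        hist, edges = np.histogram(eigs, bins=40, density=True)
        centers = (edges[:-1] + edges[1:]) / 2
        print(np.max(np.abs(hist - marchenko_pastur_density(centers, n / m))))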
For non-Hermitian square random matrices with independent entries, see the circular law.
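For a non-Hermitian matrix with independent entries of variance 1/n, the circular law states that the eigenvalues fill the unit disc in the complex plane approximately uniformly as n → ∞; a minimal sketch:

    import numpy as np

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 1000
        # iid real Gaussian entries with variance 1/n; no symmetry is imposed
        x = rng.standard_normal((n, n)) / np.sqrt(n)
        eigs = np.linalg.eigvals(x)             # complex eigenvalues in general
        # Close to 1: most eigenvalues lie inside the unit disc.
        print(np.mean(np.abs(eigs) <= 1.0))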